Google was forced to stop its AI image generation tool this week after people complained about the results it was producing.

Google Explains What Went Wrong With Gemini AI's Image Tool

Google has been slow to adopt AI, and this week's episode helps explain its hesitancy. The company was forced to suspend its AI image creation tool after it made mistakes with historical figures. Many people were unimpressed with the results produced by Google's Gemini AI image generator, which not only pushed the company to pull the feature but also to admit the mistakes it made.

So how did Google’s AI technology get it so wrong, and why did it happen? Google’s Prabhakar Raghavan has tried to explain the problem in a blog post, highlighting the troubling concerns that AI continues to raise.

He mentions that the AI tool struggled to handle prompts involving different groups of people, and that the model also became overly cautious to avoid making serious mistakes or offending anyone. “These two things caused the model to overcompensate in some cases and be too conservative in others, resulting in images that were awkward and wrong,” Raghavan notes.

Google’s reasons make sense, but an AI model that second-guesses its instructions instead of following them to a sensible result is still worrying. AI models are trained on large datasets, yet even then these tools struggle with prompts related to ethnicity, gender, or even historical facts.

For example, the AI should not alter depictions of World War II-era German soldiers by giving them a different ethnicity. Google has understandably decided to keep the AI model paused while it works to get these facts right in the future.

The company has feared these problems in the past, and now that they have become a reality, changes are needed before concerns about AI spread further.
